Search for: All records

Creators/Authors contains: "Kolinko, Yaroslav"


  1. Across basic research studies, cell counting requires significant human time and expertise. Trained experts use thin focal-plane scanning to count (click) cells in stained biological tissue. This computer-assisted process (the optical disector) requires a well-trained human to select a unique best z-plane of focus for counting each cell of interest. Though accurate, this approach typically requires an hour per case and is prone to inter- and intra-rater errors. Our group has previously proposed deep learning (DL)-based methods to automate these counts using cell segmentation at high magnification. Here we propose a novel You Only Look Once (YOLO) model that performs cell detection on multi-channel z-plane images (disector stacks). This automated Multiple Input Multiple Output (MIMO) version of the optical disector method takes an entire z-stack of microscopy images as its input and outputs cell detections (counts), each with a bounding box and a class corresponding to the z-plane where the cell appears in best focus. Unlike the previous segmentation methods, the proposed method does not require time- and labor-intensive ground truth segmentation masks for training, while achieving accuracy comparable to current segmentation-based automatic counts. The MIMO-YOLO method was evaluated on systematic-random samples of NeuN-stained tissue sections through the neocortex of mouse brains (n=7). Using a cross-validation scheme, the method correctly counted total neuron numbers with accuracy close to that of human experts and with 100% repeatability (test-retest).
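The counting logic described in this first entry, detecting each cell only in the z-plane where it is in best focus, can be sketched briefly. The snippet below is an illustrative outline under stated assumptions, not the authors' implementation: it assumes a z-stack of K grayscale planes stacked as input channels and a generic YOLO-style detector whose predicted class index encodes the best-focus plane; all function and variable names are hypothetical.

```python
# Illustrative sketch (not the authors' code): a disector z-stack becomes a
# K-channel detector input, and each detection's class index is interpreted as
# the z-plane where that cell is in best focus, so one detection = one count.
import numpy as np

def build_disector_stack(z_planes):
    """Stack K single-channel z-plane images of shape (H, W) into (K, H, W)."""
    return np.stack([p.astype(np.float32) / 255.0 for p in z_planes], axis=0)

def count_detections(detections, num_planes, conf_threshold=0.5):
    """detections: iterable of (x1, y1, x2, y2, confidence, class_id) tuples
    from a YOLO-style detector run on the stacked input.
    Returns the total cell count and a per-z-plane breakdown."""
    per_plane = [0] * num_planes
    for x1, y1, x2, y2, conf, class_id in detections:
        if conf >= conf_threshold:
            per_plane[int(class_id)] += 1   # class index = best-focus z-plane
    return sum(per_plane), per_plane
```

Because each cell is assigned to exactly one best-focus plane, summing detections over planes gives the disector count without double-counting cells that appear in several adjacent planes.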
  2. Tomaszewski, John E.; Ward, Aaron D. (Ed.)
    Automatic cell quantification in microscopy images can accelerate biomedical research. There has been significant progress in the 3D segmentation of neurons in fluorescence microscopy; however, it remains a challenge in bright-field microscopy due to the low signal-to-noise ratio and signals from out-of-focus neurons. Automatic neuron counting in bright-field z-stacks is often performed on Extended Depth of Field images or on a single thick focal-plane image, but resolving overlapping cells located at different z-depths remains difficult. The overlap can be resolved by counting every neuron in its best-focus z-plane, since the cells are separated along the z-axis. Unbiased stereology is the state of the art for total cell number estimation, and applying its unbiased counting rule requires a segmentation boundary for each cell; hence, we perform counting via segmentation. We propose to achieve neuron segmentation in the optimal focal plane by posing the binary segmentation task as a multi-class, multi-label task, and to efficiently use a 2D U-Net for inter-image feature learning in a Multiple Input Multiple Output (MIMO) system. We demonstrate the accuracy and efficiency of the MIMO approach using a bright-field microscopy z-stack dataset prepared locally by an expert. The proposed MIMO approach is also validated on a dataset from the Cell Tracking Challenge, achieving results comparable to a method equipped with memory units. Our z-stack dataset is available at
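To make the multi-class, multi-label formulation concrete, here is a minimal sketch of the multiple-input multiple-output wiring: K z-planes enter as input channels and K logit maps come out, one binary best-focus mask per plane. The tiny encoder-decoder below is only a stand-in for the 2D U-Net, and all shapes and names are assumptions for illustration, not the published architecture.

```python
# Minimal MIMO sketch (illustrative, not the published network): K z-planes in,
# K per-plane binary masks out; channel-wise binary cross-entropy makes the
# task multi-label, with one "class" per z-plane.
import torch
import torch.nn as nn

class MIMOSegNet(nn.Module):
    def __init__(self, num_planes: int, base: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(num_planes, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(base, num_planes, 1),        # one logit map per z-plane
        )

    def forward(self, z_stack):                    # z_stack: (B, K, H, W)
        return self.decoder(self.encoder(z_stack)) # logits:  (B, K, H, W)

# Toy training step: each target channel marks cells only in their best-focus plane.
model = MIMOSegNet(num_planes=8)
stack = torch.randn(2, 8, 128, 128)
target = torch.randint(0, 2, (2, 8, 128, 128)).float()
loss = nn.BCEWithLogitsLoss()(model(stack), target)
loss.backward()
```

A single 2D network thus sees all planes at once (inter-image features) while still emitting a separate mask per plane, which is what allows overlapping cells to be separated by depth.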
  3. Stereology-based methods provide the current state-of-the-art approaches for accurate quantification of numbers and other morphometric parameters of biological objects in stained tissue sections. The advent of artificial intelligence (AI)-based deep learning (DL) offers the possibility of improving throughput by automating the collection of stereology data. We have recently shown that DL can achieve accuracy comparable to manual stereology but with higher repeatability, improved throughput, and less variation due to human factors by quantifying the total number of immunostained cells at their maximal profile of focus in extended depth of field (EDF) images. In the first of two novel contributions in this work, we propose a semi-automatic approach that uses a handcrafted Adaptive Segmentation Algorithm (ASA) to automatically generate ground truth on EDF images for training our DL models to automatically count cells using unbiased stereology methods. This approach increases the amount of training data, thereby improving the accuracy and efficiency of automatic cell counting, without requiring extra expert time. The second contribution is a Multi-channel Input and Multi-channel Output (MIMO) method that uses a U-Net deep learning architecture for automatic cell counting in a stack of z-axis images (also known as a disector stack). This DL-based digital automation of the ordinary optical fractionator ensures accurate counts through spatial separation of stained cells in the z-plane, thereby avoiding the false negatives (under-counting due to masking) that arise in EDF images when cells overlap in the z-plane, without the shortcomings of 3D and recurrent DL models. We demonstrate the practical applications of these advances with automatic disector-based estimates of the total number of NeuN-immunostained neurons in a mouse neocortex. In summary, this work provides the first demonstration of automatic estimation of a total cell number in tissue sections using a combination of deep learning and the disector-based optical fractionator method.
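For context, the automated disector counts described in this entry feed the standard optical fractionator estimate, in which the raw count is scaled by the reciprocal of each sampling fraction. The sketch below states that textbook formula; the variable names and the worked numbers are illustrative, not values from the study.

```python
# Standard optical fractionator estimate (textbook stereology, not code from the
# paper): N_hat = sum_Q * (1/ssf) * (1/asf) * (1/tsf), where sum_Q is the raw
# number of cells counted across the sampled disectors.
def optical_fractionator_estimate(sum_q, ssf, asf, tsf):
    """sum_q: cells counted across all disector samples
    ssf: section sampling fraction (e.g., every 10th section -> 0.1)
    asf: area sampling fraction (counting-frame area / sampling-grid area)
    tsf: thickness sampling fraction (disector height / section thickness)"""
    return sum_q * (1.0 / ssf) * (1.0 / asf) * (1.0 / tsf)

# Hypothetical example: 150 cells counted, every 10th section, 25% of the area,
# disector height equal to half the section thickness -> estimate of 12,000 cells.
total_estimate = optical_fractionator_estimate(150, ssf=0.1, asf=0.25, tsf=0.5)
```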
  4. Išgum, Ivana; Colliot, Olivier (Ed.)
  5. de la Torre, Jack (Ed.)
    Background: Microcirculatory factors play an important role in amyloid-β (Aβ)-related neuropathology in Alzheimer's disease (AD). Transgenic (Tg) rat models of mutant Aβ deposition can enhance our understanding of this microvascular pathology. Objective: Here we report stereology-based quantification and comparisons (between- and within-group) of microvessel length, number, and associated parameters in hippocampal subregions in a Tg model of AD in Fischer 344 rats and non-Tg littermates. Methods: Systematic-random samples of tissue sections were processed and laminin-immunostained to visualize microvessels through the entire hippocampus in Tg and non-Tg rats. A computer-assisted stereology system was used to quantify microvessel parameters, including total number, total length, and associated densities, in dentate gyrus (DG) and cornu ammonis (CA) subregions. Results: Thin hair-like capillaries are common near Aβ plaques in hippocampal subregions of Tg rats. Compared to non-Tg rats, Tg rats show a significant 53% increase in average length per capillary across the entire hippocampus (p≤0.04), a 49% reduction in capillary length in the DG (p≤0.02), and higher microvessel density in principal cell layers (p≤0.03). Furthermore, within-group comparisons confirm that Tg, but not non-Tg, rats show significant increases in number density (p≤0.01) and potential diffusion distance (p≤0.04) of microvessels in principal cell layers of hippocampal subregions. Conclusion: We show that Tg deposition of mutant human Aβ in rats disrupts the wild-type microanatomy of hippocampal microvessels. Stereology-based microvascular parameters could promote the development of novel strategies for protection and the therapeutic management of AD.
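As a rough guide to how the microvessel parameters in this entry relate to one another, the sketch below derives densities and a potential diffusion distance from the stereology totals. The Krogh-cylinder approximation for the diffusion distance (r = 1/sqrt(pi * Lv)) is a common convention assumed here, not necessarily the exact definition used in the study, and all names are illustrative.

```python
# Illustrative relationships between stereology totals and the densities and
# diffusion distance discussed above (assumptions, not the authors' code).
import math

def microvessel_parameters(total_length_um, total_number, reference_volume_um3):
    length_density = total_length_um / reference_volume_um3   # um of vessel per um^3
    number_density = total_number / reference_volume_um3      # capillaries per um^3
    mean_length_per_capillary = total_length_um / total_number
    # Potential (radial) diffusion distance: radius of the tissue cylinder each
    # capillary would supply if pi * r^2 * length_density = 1 (Krogh-type estimate).
    diffusion_distance = 1.0 / math.sqrt(math.pi * length_density)
    return {
        "length_density_per_um2": length_density,
        "number_density_per_um3": number_density,
        "mean_length_per_capillary_um": mean_length_per_capillary,
        "diffusion_distance_um": diffusion_distance,
    }
```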